Rational Process Models

Authors

  • Edward Vul
  • Joshua B. Tenenbaum
  • Thomas L. Griffiths
  • Roger Levy
  • Craig R. M. McKenzie
Abstract

Rational, Bayesian accounts of cognition at the computational level have enjoyed much success in recent years: human behavior is consistent with optimal Bayesian agents in low-level perceptual and motor tasks as well as high-level cognitive tasks such as category and concept learning, language, and theory of mind. However, two challenges have thus far been ignored by these computational-level models. The first is the “process” challenge: Bayesian models often assume unbounded cognitive resources available for computation, yet cognitive psychology has emphasized the severe limitations imposed on human cognition. How do models at the computational level relate to traditional models from cognitive psychology concerned with psychological mechanisms such as memory and attention? The second is the “scaling” challenge: research in machine learning and statistics has shown that exact computation is intractable for inference problems on the scale relevant to human cognition, indicating that people must be solving these problems approximately. How can Bayesian models of cognition scale up to problems of the size the mind faces in the real world, beyond the small scales of the typical laboratory tasks where these models are usually tested?

This symposium brings together researchers from machine learning, cognitive science, linguistics, and psychology who are working at the interface between the computational and algorithmic levels of description. The overarching theme is a new approach to answering both the “process” and “scaling” challenges by rational reverse-engineering of Bayesian algorithmic-level models. Rational or reverse-engineering analyses are by now familiar for computational-level questions, where they ask: what is the ideal inference (or at least, what are rational inferences with good statistical properties) given the available information and the task? The answer to this computational-level question can often be described as some form of Bayesian inference, and models derived from these considerations have enjoyed success in explaining some aspects of human behavior. This symposium proposes an approach that asks the same question at the algorithmic level: what is the ideal way to implement this inferential computation given constraints on space, time, energy, the scale of the problem, and so on? The answer to these problems in Bayesian statistics and machine learning is usually some form of Monte Carlo. Monte Carlo sampling approximates a probability distribution by simulating a stochastic process whose long-run properties reflect the distribution being simulated. Sampling is a general strategy for approximating otherwise intractable statistical inferences with limited resources: it may be applied to any inference problem and is more robust to the size of the problem than other numerical methods.

Based on such reverse-engineering considerations, the panelists suggest that in a variety of domains (categorization – Griffiths; learning temporal structure – Steyvers; parsing language – Levy; and multiple object tracking – Vul) people adopt sampling algorithms to approximate optimal inference. One specific suggestion that cuts across the fields and topics of the speakers is that instead of representing a full posterior distribution, people keep track of a few sampled hypotheses. In the sequential tasks considered here, a sample-based representation of the posterior may be updated online with a particle-filtering (sequential Monte Carlo) strategy.
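
As an illustrative sketch of the kind of algorithm the panelists have in mind, the Python snippet below implements a generic particle filter that keeps only a handful of sampled hypotheses: each hypothesis is propagated through the assumed dynamics, reweighted by how well it explains the incoming observation, and resampled, so that the full posterior is never represented explicitly. The state-space model (a 1-D Gaussian random walk observed with Gaussian noise) and all parameter values are assumptions chosen purely for illustration; they are not taken from any of the speakers' models.

    import numpy as np

    def particle_filter(observations, n_particles=10,
                        transition_sd=1.0, obs_sd=1.0, seed=None):
        # Sequential Monte Carlo for a 1-D Gaussian random-walk state
        # observed with Gaussian noise. Only `n_particles` sampled
        # hypotheses about the hidden state are kept and updated online.
        rng = np.random.default_rng(seed)
        particles = rng.normal(0.0, 1.0, size=n_particles)  # initial hypotheses from the prior
        estimates = []
        for y in observations:
            # 1. Propagate each hypothesis through the assumed dynamics.
            particles = particles + rng.normal(0.0, transition_sd, size=n_particles)
            # 2. Weight hypotheses by how well they explain the new observation.
            log_w = -0.5 * ((y - particles) / obs_sd) ** 2
            weights = np.exp(log_w - log_w.max())
            weights /= weights.sum()
            # 3. Resample: hypotheses that explain the data survive and multiply.
            particles = rng.choice(particles, size=n_particles, replace=True, p=weights)
            # The posterior mean is approximated by the few surviving samples.
            estimates.append(particles.mean())
        return np.array(estimates)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        hidden = np.cumsum(rng.normal(size=50))    # hidden random-walk trajectory
        observed = hidden + rng.normal(size=50)    # noisy observations of it
        print(particle_filter(observed, n_particles=10, seed=1).round(2))

With only about ten particles, the filter's estimates are noisier and more path-dependent than exact Bayesian inference, which is the kind of resource-limited approximation the panelists relate to human performance in sequential tasks.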
Across the different domains and models considered in this symposium, this domain-general algorithm provides a cognitively plausible mechanism for approximating Bayes-optimal computations online. What’s most exciting is that these models make contact with (and even extend) the rich empirical paradigms of traditional cognitive psychology and can account for interesting new aspects of human behavior. The panelists in this symposium suggest that instead of producing ad hoc cognitive process models one at a time, one for each task, the development of process models can be guided by reverse-engineering considerations. Through rational analysis of algorithms for approximate Bayesian inference, we can link up Bayesian models with traditional process accounts in cognitive psychology and suggest how Bayesian …


Similar resources

Beyond ‘Funnel’ and ‘Fireworks’: ‘Water Ribbed Balloon’ as a New Metaphorical Approach to Innovation-in-Practice

Product innovation success has very much to do with the development of models or metaphors that are able to guide actors. One can observe two traditions in this regard: rational and non-rational models. Apparently in the former the model, such as “development funnel”, is regarded as a mechanism and rigidly applicable, picturing innovation as an orderly, goal-oriented, value-neutral, and systematic pr...

Testing for Stochastic Non-Linearity in the Rational Expectations Permanent Income Hypothesis

The Rational Expectations Permanent Income Hypothesis implies that consumption follows a martingale. However, most empirical tests have rejected the hypothesis. Those empirical tests are based on linear models. If the data generating process is non-linear, conventional tests may not assess some of the randomness properly. As a result, inference based on conventional tests of linear models can b...

Rational Choice Theory: A Cultural Reconsideration

Economists have heralded the formulation of the expected utility theorem as a universal method of choice under uncertainty. In their seminal paper, Stigler and Becker (Stigler & Becker, 1977) declared that “human behavior can be explained by a generalized calculus of utility-maximizing behavior” (p.76). The universality of the rational choice theory has been widely criticized by psychologists, ...

Non Uniform Rational B Spline (NURBS) Based Non-Linear Analysis of Straight Beams with Mixed Formulations

Displacement finite element models of various beam theories have been developed traditionally using conventional finite element basis functions (i.e., cubic Hermite, equi-spaced Lagrange interpolation functions, or spectral/hp Legendre functions). Various finite element models of beams differ from each other in the choice of the interpolation functions used for the transverse deflection w, tota...

Solving Generalized Multivariate Linear Rational Expectations Models

We generalize the linear rational expectations solution method of Whiteman (1983) to the multivariate case. This facilitates the use of a generic exogenous driving process that must only satisfy covariance stationarity. Multivariate cross-equation restrictions linking the Wold representation of the exogenous process to the endogenous variables of the rational expectations model are obtained. We...

Unifying Rational Models of Categorization via the Hierarchical Dirichlet Process

Models of categorization make different representational assumptions, with categories being represented by prototypes, sets of exemplars, and everything in between. Rational models of categorization justify these representational assumptions in terms of different schemes for estimating probability distributions. However, they do not answer the question of which scheme should be used in represen...



Publication date: 2009